Reformulating least mean squares in the data domain
Authors
Abstract
Similar Articles
Unifying Least Squares, Total Least Squares and Data Least Squares
The standard approaches to solving overdetermined linear systems Ax ≈ b construct minimal corrections to the vector b and/or the matrix A such that the corrected system is compatible. In ordinary least squares (LS) the correction is restricted to b, while in data least squares (DLS) it is restricted to A. In scaled total least squares (Scaled TLS) [15], corrections to both b and A are allowed, ...
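As a rough illustration of that distinction (not drawn from the cited paper), the following NumPy sketch compares the ordinary LS solution, which corrects only b, with the TLS solution obtained from the singular value decomposition of the augmented matrix [A b]; the problem sizes and noise level are arbitrary assumptions.

import numpy as np

# Illustrative overdetermined system A x ~ b (sizes and noise are assumptions).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.05 * rng.standard_normal(20)

# Ordinary LS: correct only b, i.e. minimize ||A x - b||_2.
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

# TLS: correct both A and b, via the SVD of the augmented matrix [A | b];
# the solution comes from the right singular vector of the smallest singular value.
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
x_tls = -v[:-1] / v[-1]

print("LS :", x_ls)
print("TLS:", x_tls)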
Least Squares Matching in the Transform Domain
Because of the large image size of aerial and satellite imagery used in digital photogrammetry, the use of image compression to reduce the amount of storage space has become more common. Normally this would add an additional operation of decompression before photogrammetric processing could be carried out. This paper describes a method whereby image matching can be conducted without the need to...
Bayesian Extensions of Kernel Least Mean Squares
The kernel least mean squares (KLMS) algorithm is a computationally efficient nonlinear adaptive filtering method that “kernelizes” the celebrated (linear) least mean squares algorithm. We demonstrate that the least mean squares algorithm is closely related to Kalman filtering, and thus, the KLMS can be interpreted as an approximate Bayesian filtering method. This allows us to systematicall...
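To fix ideas, here is a minimal sketch of the KLMS recursion described above; the Gaussian kernel width, step size, and toy data are illustrative assumptions rather than the authors' settings.

import numpy as np

def gauss(x, y, sigma=0.5):
    # Gaussian kernel between two input vectors (width is an assumption).
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def klms(X, d, eta=0.2):
    # Kernel LMS: predict with the current expansion, then add the new
    # sample as a center whose coefficient is the scaled prediction error.
    centers, alphas, errors = [], [], []
    for x, target in zip(X, d):
        y_hat = sum(a * gauss(c, x) for c, a in zip(centers, alphas))
        e = target - y_hat
        centers.append(x)
        alphas.append(eta * e)
        errors.append(e)
    return centers, alphas, np.array(errors)

# Toy nonlinear regression problem.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2))
d = np.sin(X @ np.array([1.0, -0.7]))
_, _, err = klms(X, d)
print("MSE, first 50 vs last 50 samples:", np.mean(err[:50] ** 2), np.mean(err[-50:] ** 2))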
Hard Threshold Least Mean Squares Algorithm
This work presents a new variation of the commonly used least mean squares (LMS) algorithm for the identification of sparse signals whose sparsity is known a priori, applying a hard threshold operator in every iteration. It examines some useful properties of the algorithm and compares it with the traditional LMS and other sparsity-aware variations of the same algorithm. It goes on to examine the app...
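One way such an update could look is sketched below, with the sparsity level s assumed known and the thresholding done by keeping the s largest-magnitude coefficients after every LMS step; the step size, filter length, and data are illustrative choices, not necessarily the paper's exact algorithm.

import numpy as np

def hard_threshold(w, s):
    # Keep only the s largest-magnitude entries of w, zero the rest.
    keep = np.argsort(np.abs(w))[-s:]
    out = np.zeros_like(w)
    out[keep] = w[keep]
    return out

def ht_lms(X, d, s, mu=0.05):
    # LMS update followed by a hard threshold in every iteration.
    w = np.zeros(X.shape[1])
    for x, target in zip(X, d):
        e = target - w @ x
        w = hard_threshold(w + mu * e * x, s)
    return w

# Sparse 20-tap system with only 3 active taps (synthetic example).
rng = np.random.default_rng(2)
w_true = np.zeros(20)
w_true[[2, 7, 15]] = [1.0, -0.8, 0.5]
X = rng.standard_normal((2000, 20))
d = X @ w_true + 0.01 * rng.standard_normal(2000)
print(np.round(ht_lms(X, d, s=3), 2))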
Least Squares Fitting of Data
This is the usual introduction to least squares fitting by a line when the data represent measurements where the y-component is assumed to be functionally dependent on the x-component. Given a set of m samples {(x_i, y_i)}, i = 1, …, m, determine A and B so that the line y = Ax + B best fits the samples in the sense that the sum of the squared errors between the y_i and the line values Ax_i + B is minimized. Note...
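A short NumPy sketch of that fit, solving for A and B in the least-squares sense (the synthetic samples are illustrative):

import numpy as np

# Synthetic samples around a known line, for illustration only.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + 0.3 * rng.standard_normal(50)

# Design matrix [x, 1]; the least-squares solution gives slope A and intercept B.
M = np.column_stack([x, np.ones_like(x)])
(A, B), *_ = np.linalg.lstsq(M, y, rcond=None)
print("A =", round(A, 3), "B =", round(B, 3))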
Journal
Journal title: IEEE Signal Processing Magazine
Year: 2002
ISSN: 1053-5888
DOI: 10.1109/msp.2002.998082